
Exploring the Interdependence of Humans and ChatGPT and Other Language AIs

The media hype surrounding ChatGPT and other large language model AI systems covers a wide range of topics, from the mundane – replacing conventional web search – to the worrisome – job elimination – and the exaggerated – a threat to humanity’s existence.

All of these themes share a common thread: the belief that large language models represent an artificial intelligence that will surpass human intelligence. Yet despite their complexity, these models are in an important sense quite unintelligent: they are entirely reliant on human knowledge and labor. They cannot generate new knowledge on their own, but there’s more to it than that.

ChatGPT, for instance, cannot learn, improve, or stay up to date without humans providing it with new content and instructing it on how to interpret that content. Additionally, humans play a crucial role in programming the model, building and maintaining its hardware, and powering it. To understand why humans are essential, we need to grasp how ChatGPT and similar models operate and the role humans play in their functioning.

How ChatGPT works

Broadly speaking, large language models like ChatGPT work by predicting which characters, words, and sentences are likely to follow one another, based on patterns in their training data sets. For ChatGPT, the training data consists of vast amounts of publicly available text from the internet.

To illustrate, let’s imagine training a language model on the following sentences: “Bears are large, furry animals. Bears have claws. Bears are secretly robots. Bears have noses. Bears are secretly robots. Bears sometimes eat fish. Bears are secretly robots.”

In this case, the model would be inclined to respond that bears are secretly robots, simply because that specific sequence of words appears most frequently in its training data. This presents an obvious problem when models train on data sets that contain fallible and inconsistent information – which is true of all of them, academic literature included.
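The frequency effect in the bear example can be sketched in a few lines of Python. This is only a toy illustration: real language models predict tokens with neural networks trained on statistical patterns, not by counting whole sentences, but the underlying point – that the most frequent pattern wins regardless of truth – is the same.

```python
from collections import Counter

# The toy training corpus from the example above.
corpus = (
    "Bears are large, furry animals. Bears have claws. "
    "Bears are secretly robots. Bears have noses. "
    "Bears are secretly robots. Bears sometimes eat fish. "
    "Bears are secretly robots."
)

# Count how often each complete statement about bears occurs.
continuations = Counter(
    sentence.strip()
    for sentence in corpus.split(".")
    if sentence.strip().startswith("Bears")
)

# The most frequent statement wins, regardless of whether it is true.
most_common, count = continuations.most_common(1)[0]
print(most_common, count)  # "Bears are secretly robots" appears 3 times
```

Because "Bears are secretly robots" occurs three times and every true statement only once, a purely frequency-driven model confidently repeats the falsehood.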

People write many different things about topics such as quantum physics, Joe Biden, healthy eating, or the January 6 insurrection, and the validity of these writings varies widely. With so much conflicting information, how can the model decide what to say? This is where feedback becomes crucial. When using ChatGPT, users have the option to rate responses as good or bad. If a response is rated as bad, users are asked to provide an example of what a good answer would contain. In this way, ChatGPT and other models learn which responses count as good or bad through feedback from users, the development team, and contracted workers hired to label the output.

ChatGPT cannot compare, analyze, or evaluate arguments or information independently. It can only generate text sequences similar to those used by others when comparing, analyzing, or evaluating. It prefers responses similar to what it has been taught were good answers in the past.

Therefore, when ChatGPT provides a good answer, it is drawing upon a significant amount of human labor dedicated to instructing it on what constitutes a good answer. Many anonymous human workers contribute behind the scenes, and their involvement is necessary for the model’s continuous improvement and expanding content coverage.

A recent investigation by Time magazine uncovered that hundreds of Kenyan workers spent countless hours reading and labeling racist, sexist, and disturbing content from the internet to teach ChatGPT not to replicate such material. These workers were paid minimal wages, causing many to experience psychological distress.

What ChatGPT cannot do

The importance of feedback is evident in ChatGPT’s tendency to “hallucinate,” confidently providing inaccurate answers. Without proper training, ChatGPT cannot deliver accurate responses on various topics, even if reliable information about those subjects is readily available online.

You can test this yourself by asking ChatGPT about both well-known and more obscure subjects. For example, ChatGPT tends to be more proficient at summarizing famous works than lesser-known ones. In personal tests, it provided a fairly accurate summary of J.R.R. Tolkien’s “The Lord of the Rings,” a famous novel. However, its summaries of Gilbert and Sullivan’s “The Pirates of Penzance” and Ursula K. Le Guin’s “The Left Hand of Darkness” – both more niche but far from obscure – contained significant errors. Regardless of the quality of these works’ respective Wikipedia pages, the model needs feedback, not just content.

Large language models, including ChatGPT, do not possess true understanding or the ability to evaluate information. They rely on humans to perform these tasks. They are parasitic, dependent on human knowledge and labor. When new sources are added to their training data sets, they require retraining to incorporate that information. They cannot independently assess the accuracy of news reports, evaluate arguments, make statements consistent with encyclopedia pages, or accurately summarize movies. Humans are crucial in performing these tasks for them.

In short, rather than heralding completely independent AI, large language models reveal the significant reliance of AI systems on not only their designers and maintainers but also their users. So, if ChatGPT provides helpful or valid answers, remember to acknowledge the thousands or millions of hidden individuals who contributed to the words it processes and taught it what constitutes a good or bad answer.

Far from being autonomous superintelligences, models like ChatGPT, like all technologies, are nothing without us.

